
How the far right is weaponising AI-generated content in Europe

The Guardian

From fake images designed to cause fears of an immigrant "invasion" to other demonisation campaigns targeted at leaders such as Emmanuel Macron, far-right parties and activists across western Europe are at the forefront of the political weaponisation of generative artificial intelligence technology. This year's European parliamentary elections were the launchpad for a rollout of AI-generated campaigning by the European far right, experts say, which has continued to proliferate since. This month, the issue reached the independent oversight board of Mark Zuckerberg's Meta when the body opened an investigation into anti-immigration content on Facebook. The inquiry by the oversight board will look at a post from a German account featuring an AI-generated image emblazoned with anti-immigrant rhetoric. It is part of a wave of AI-made rightwing content on social media networks.


Meta needs updated rules for sexually explicit deepfakes, Oversight Board says

Engadget

Meta's Oversight Board is urging the company to update its rules around sexually explicit deepfakes. The board made the recommendations as part of its decision in two cases involving AI-generated images of public figures. The cases stem from two user appeals over AI-generated images of public figures, though the board declined to name the individuals. One post, which originated on Instagram, depicted a nude Indian woman. The post was reported to Meta but the report was automatically closed after 48 hours, as was a subsequent user appeal.


Meta's Oversight Board will rule on AI-generated sexual images

Engadget

Meta's Oversight Board is once again taking on the social network's rules for AI-generated content. The board has accepted two cases that deal with AI-made explicit images of public figures. While Meta's rules already prohibit nudity on Facebook and Instagram, the board said in a statement that it wants to address whether "Meta's policies and its enforcement practices are effective at addressing explicit AI-generated imagery." Sometimes referred to as "deepfake porn," AI-generated images of female celebrities, politicians and other public figures have become an increasingly prominent form of online harassment and have drawn a wave of proposed regulation. With the two cases, the Oversight Board could push Meta to adopt new rules to address such harassment on its platform.


Meta plans to more broadly label AI-generated content

Engadget

Meta says that its current approach to labeling AI-generated content is too narrow and that it will soon apply a "Made with AI" badge to a broader range of videos, audio and images. Starting in May, it will append the label to media when it detects industry-standard AI image indicators or when users acknowledge that they're uploading AI-generated content. The company may also apply the label to posts that fact-checkers flag, though it's likely to downrank content that's been identified as false or altered. The company announced the measure in the wake of an Oversight Board decision regarding a video that was maliciously edited to depict President Joe Biden touching his granddaughter inappropriately. The Oversight Board agreed with Meta's decision not to take down the video from Facebook as it didn't violate the company's rules regarding manipulated media.


Meta to label AI-generated images shared on Facebook and Instagram - but in 'coming months' as US presidential race heats up

Daily Mail - Science & tech

Meta is introducing a tool to identify AI-generated images shared on its platforms amid a global rise in synthetic content spreading misinformation. Because such content is produced by a range of systems across the web, the Mark Zuckerberg-owned company aims to expand its labels to cover images made with tools from other companies such as Google, OpenAI, Microsoft, and Adobe. Meta said it will fully roll out the labeling feature in the coming months and plans to add a feature that lets users flag AI-generated content. However, with the US presidential race in full swing, some wonder whether the labels will arrive in time to stop fake content from spreading. The move comes after Meta's Oversight Board urged the company to take steps to label manipulated audio and video that could mislead users. 'The Board's recommendations go further in that it advised the company to expand the Manipulated Media policy to include audio, clearly state the harms it seeks to reduce, and begin labeling these types of posts more broadly than what was announced,' Oversight Board spokesperson Dan Chaison told Dailymail.com.


Meta Will Crack Down on AI-Generated Fakes--but Leave Plenty Undetected

WIRED

Meta, like other leading tech companies, has spent the past year promising to speed up deployment of generative artificial intelligence. Today it acknowledged it must also respond to the technology's hazards, announcing an expanded policy of tagging AI-generated images posted to Facebook, Instagram, and Threads with warning labels to inform people of their artificial origins. Yet much of the synthetic media likely to appear on Meta's platforms is unlikely to be covered by the new policy, leaving many gaps through which malicious actors could slip. "It's a step in the right direction, but with challenges," says Sam Gregory, program director of the nonprofit Witness, which helps people use technology to support human rights. Meta already labels AI-generated images made using its own generative AI tools with the tag "Imagined with AI," in part by looking for the digital "watermark" its algorithms embed into their output.


TechScape: Why is the UK so slow to regulate AI?

The Guardian

Britain wants to lead the world in AI regulation. But AI regulation is a rapidly evolving, contested policy space in which there's little agreement over what a good outcome would look like, let alone the best methods to get there. And being the third most important hub of AI research in the world doesn't give you an awful lot of power when the first two are the US and China. How to slice through this Gordian knot? Simple: move swiftly and decisively to do … absolutely nothing.


Meta Oversight Board Warns of 'Incoherent' Rules After Fake Biden Video

TIME - Tech

Meta Platforms Inc.'s independent Oversight Board agreed with the company's recent decision to leave up a misleading video of US President Joe Biden, but criticized its policies on content generated by artificial intelligence as "incoherent" and too narrow. The board, which was set up in 2020 by management to independently review some of the company's most significant content moderation decisions, on Monday urged Meta to update its policies quickly ahead of the 2024 U.S. general election. "The Board is concerned about the manipulated media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent, such as disrupting electoral processes," the organization said in a statement. The criticism from the board came after reviewing Meta's decision to leave up a manipulated video of Biden, which was edited to make it look like he was inappropriately touching his adult granddaughter's chest. The video included a caption that referred to Biden as a "pedophile."


A Doctored Biden Video Is a Test Case for Facebook's Deepfake Policies

WIRED

During the 2022 US midterm elections, a manipulated video of President Joe Biden circulated on Facebook. The original footage showed Biden placing an "I voted" sticker on his granddaughter's chest and kissing her on the cheek. The doctored version looped the footage to make it appear he was repeatedly touching the girl, with a caption that labeled him a "pedophile." Meta left the video up. Today, the company's Oversight Board--an independent body that looks into the platform's content moderation--announced that it will review that decision, in an attempt to push Meta to address how it will handle manipulated media and election disinformation ahead of the 2024 US presidential election and more than 50 other votes to be held around the world next year.


Meta's Oversight Board will weigh in on 'altered' Facebook video of Joe Biden

Engadget

Meta's Oversight Board is set to take on a new high-profile case ahead of next year's presidential election. The board said it planned to announce a case involving a user appeal related to an "altered" video of President Joe Biden. The board didn't disclose specifics of the case, which it said would be announced formally "in the coming days," but suggested it will touch on policies that could have far-reaching implications for Meta. "In the coming days the Oversight Board will announce a new case regarding a user-appeal to remove an altered video of President Joe Biden on Facebook," the Oversight Board said in a statement. "This case will examine issues related to manipulated media on Meta's platforms and the company's policies on misinformation, especially around elections."